Multilevel Cache Management Based on Application Hints
Authors
Abstract
Multilevel caching is common in many storage configurations, introducing new challenges to cache management. Some of the existing cache replacement policies are designed to improve cache hit rates in the absence of temporal locality, while others avoid replication of data in multiple caches. Additional policies use application hints, an approach that has been shown to significantly improve cache utilization, but only in the highest level of the hierarchy. Our main contribution is a global, non-centralized, dynamic and informed management policy for multiple levels of cache. Karma is aimed at applications, such as databases, which can easily provide hints. These hints are leveraged for making informed allocation and replacement decisions in all cache levels, while preserving exclusive caching and adjusting to changes in access patterns. This integrated solution is significantly better than other approaches in utilizing the aggregate cache resources. To show the value of Karma, we compare it to LRU, 2Q [17], ARC [22], MultiQ [33], LRU-SP [6] and Demote [32] on database and Zipf traces. For a complete evaluation, we additionally define extensions of each of these policies to work in a hierarchical cache. On all but very small caches, Karma shows significant improvements over other policies. For example, on a permutation of TPC-H queries, Karma improves over pure LRU by an average of 85%. It adds an average of 50% to the improvement of Demote over LRU and an average of 25% to that of LRU-SP.
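To illustrate the general idea described above (hints used to decide allocation in every cache level while keeping the levels exclusive), the following minimal Python sketch partitions two cache levels by hinted access frequency. This is not the paper's actual Karma algorithm; the class names, the hint format, and the greedy partitioning heuristic are all illustrative assumptions.

    # Illustrative sketch, not the paper's Karma algorithm: application hints rank
    # disjoint block ranges by expected access frequency, and each range is assigned
    # to exactly one cache level (exclusive caching), hottest ranges closest to the
    # application.
    from collections import OrderedDict

    class HintedCacheLevel:
        """One cache level that only admits blocks from the ranges assigned to it."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.assigned_ranges = set()   # range ids this level is responsible for
            self.blocks = OrderedDict()    # block id -> range id, in LRU order

        def access(self, block, range_id):
            if block in self.blocks:
                self.blocks.move_to_end(block)       # refresh recency on a hit
                return True
            if range_id in self.assigned_ranges:     # allocate only for assigned ranges
                if len(self.blocks) >= self.capacity:
                    self.blocks.popitem(last=False)  # evict this level's LRU block
                self.blocks[block] = range_id
            return False                             # miss: block is read from below

    def partition_by_hints(hints, levels):
        """Greedily assign whole ranges to levels by hinted accesses per block.

        `hints` maps range_id -> (expected accesses, number of blocks); ranges are
        assumed disjoint, so placing each range in one level keeps caching exclusive.
        """
        ranked = sorted(hints, key=lambda r: hints[r][0] / hints[r][1], reverse=True)
        for level in levels:
            level.assigned_ranges.clear()
            free = level.capacity
            while ranked and hints[ranked[0]][1] <= free:
                r = ranked.pop(0)
                level.assigned_ranges.add(r)
                free -= hints[r][1]

    # Hypothetical usage: a client cache above a storage-server cache, with hints
    # saying the index is hot, a dimension table is warm, and a table scan is cold.
    client, server = HintedCacheLevel(100), HintedCacheLevel(400)
    hints = {"index": (5000, 80), "dimension": (800, 300), "fact_scan": (10, 4000)}
    partition_by_hints(hints, [client, server])
    hit = client.access(block=7, range_id="index") or server.access(block=7, range_id="index")

In this toy partitioning, the index ends up only in the client cache and the dimension table only in the server cache, so no block is replicated across levels; the cold scan bypasses both caches. Reacting to changed hints simply means recomputing the partition, which loosely mirrors the adaptive behavior the abstract describes.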
Similar Resources
Karma: Know-It-All Replacement for a Multilevel Cache
Multilevel caching, common in many storage configurations, introduces new challenges to traditional cache management: data must be kept in the appropriate cache and replication avoided across the various cache levels. Some existing solutions focus on avoiding replication across the levels of the hierarchy, working well without information about temporal locality–information missing at all but t...
Optimizing Hierarchical Storage Management For Database System
Caching is a classical but effective way to improve system performance. To improve system performance, servers, such as database servers and storage servers, contain significant amounts of memory that act as a fast cache. Meanwhile, as new storage devices such as flash-based solid state drives (SSDs) are added to storage systems over time, using the memory cache is not the only way to improve s...
Generating cache hints for improved program efficiency
One of the new extensions in EPIC architectures are cache hints. On each memory instruction, two kinds of hints can be attached: a source cache hint and a target cache hint. The source hint indicates the true latency of the instruction, which is used by the compiler to improve the instruction schedule. The target hint indicates at which cache levels it is profitable to retain data, allowing to ...
Ghosts in the Machine: Interfaces for Better Power Management
We observe that the modularity of current power management algorithms often leads to poor results. We propose two new interfaces that pierce the abstraction barrier that inhibits device power management. First, an OS power manager allows applications to query the current power mode of I/O devices to evaluate the performance and energy cost of alternative strategies for reading and writing data....
Memory Issues Chair
It is well known that the use of cache memory significantly improves the performance of a computing system. There exists a large quantity of technical literature dealing with analysis and design of cache-based systems. One of the most recent trends in computer technology is to implement multilevel caches. Today, most new processors implement an on-chip dual cache; one for instructions and one f...
Journal:
Volume / Issue:
Pages: -
Publication date: 2011